Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- Fisler, Kathi; Denny, Paul; Franklin, Diana; Hamilton, Margaret (Eds.). Background: Prior work has primarily been concerned with identifying: (1) how Open Education Resources (OERs) can be used to increase the availability of educational materials, (2) what motivations are behind their adoption and usage in classrooms, and (3) what barriers impede said adoption. However, there is relatively little work investigating the motives and barriers to contribution in OER. Objectives: Our goal is to understand what motivates instructors to contribute to and adopt OERs, and what dissuades them. Additionally, we wish to know what would increase the likelihood of instructors contributing their work to OER repositories. Method: We conduct a 10-question survey of computing instructors on OER, with a heavy emphasis on what would lead to OER contributions. Using thematic analysis, we mine the broad themes from our respondents and group them into broader topical areas. Findings: Novel contributions include discussions of what faculty are not willing to share as readily (in particular, exam questions are of concern due to possible student cheating), as well as discussions of different views on monetary and non-monetary (e.g., promotion and tenure value) incentives for contributing to OER efforts. With respect to the kinds of OER faculty want to use, our findings line up with prior literature. Implications: As course materials become more sophisticated and the range of topics taught in computing continues to grow, the communal effort required to maintain a broad collection of high-quality OERs also grows. Understanding what factors influence instructors to contribute to this effort, and how we can facilitate the contribution, discovery, and use of OERs, is fundamental both to how OER repositories should be organized and to how funding initiatives to support them should be structured.
- Proctoring educational assessments (e.g., quizzes and exams) has a cost, be it in faculty (and/or course staff) time or in money to pay for proctoring services. Previous estimates of the utility of proctoring (generally obtained by estimating the score advantage of taking an exam without proctoring) vary widely; most have used across-subjects experimental designs, sometimes with low statistical power. We investigated the score advantage of unproctored exams versus proctored exams using a within-subjects design for N = 510 students in an on-campus introductory programming course with 5 proctored exams and 4 unproctored exams. We found that students scored 3.32 percentage points higher on questions on unproctored exams than on proctored exams (p < 0.001). More interestingly, however, we discovered that this score advantage on unproctored exams grew steadily as the semester progressed, from around 0 percentage points at the start of the semester to around 7 percentage points by the end. As the most obvious explanation for this advantage is cheating, we refer to this behavior as the student population "learning to cheat". The data suggest both that more individuals are cheating and that the average benefit of cheating is increasing over the course of the semester. Furthermore, we observed that studying for unproctored exams decreased over the course of the semester while studying for proctored exams stayed constant. Lastly, we estimated the score advantage by question type and found that our long-form programming questions had the highest score advantage on unproctored exams, though there are multiple possible explanations for this finding. A minimal illustration of the within-subjects comparison appears below.
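To make the within-subjects design concrete, here is a minimal sketch of a paired comparison in Python. The data are simulated, and the means and spreads are assumptions chosen only to mirror the numbers in the abstract; this is not the study's actual analysis code.

```python
# Simulated illustration of a within-subjects (paired) score comparison.
# All numbers below are hypothetical stand-ins, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students = 510  # matches the N reported in the abstract

# Each student contributes a mean score under BOTH conditions, so each
# student serves as their own control (the within-subjects design).
proctored = rng.normal(loc=75.0, scale=10.0, size=n_students)
unproctored = proctored + rng.normal(loc=3.32, scale=5.0, size=n_students)

# A paired t-test removes between-student variance from the estimate,
# which is the statistical advantage of a within-subjects design over
# an across-subjects one.
diff = unproctored - proctored
t_stat, p_value = stats.ttest_rel(unproctored, proctored)
print(f"mean advantage = {diff.mean():.2f} points, t = {t_stat:.2f}, p = {p_value:.2g}")
```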
- We describe the deployment of an imperfect NLP-based automatic short answer grading system on an exam in a large-enrollment introductory college course. We characterize this deployment as both high stakes (the questions were on a mid-term exam worth 10% of students' final grade) and high transparency (the question was graded interactively during the computer-based exam, and correct solutions were shown to students, who could compare them to their answers). We study two techniques designed to mitigate the potential student dissatisfaction resulting from the imperfect AI grader incorrectly denying credit. We find (1) that providing multiple attempts can eliminate first-attempt false negatives at the cost of additional false positives, and (2) that students denied credit by the algorithm cannot reliably determine whether their answer was mis-scored. The multiple-attempts grading loop is sketched below.
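As a rough illustration of the multiple-attempts mitigation, here is a self-contained sketch. The deployed system used an NLP model for scoring; a difflib string-similarity ratio stands in here so the example runs without dependencies, and the function names, the 0.75 threshold, and the attempt limit are all assumptions for illustration.

```python
# Hypothetical sketch of an imperfect auto-grader with a multiple-attempts
# policy. Not the authors' system: difflib similarity replaces the NLP model.
from difflib import SequenceMatcher

def auto_score(answer: str, reference: str, threshold: float = 0.75) -> bool:
    """Grant credit when the answer is textually similar to the reference."""
    similarity = SequenceMatcher(None, answer.lower(), reference.lower()).ratio()
    return similarity >= threshold

def grade_with_attempts(attempts: list[str], reference: str, max_attempts: int = 3) -> bool:
    # Extra attempts recover first-attempt false negatives, but an imperfect
    # scorer will also admit more false positives, as the abstract reports.
    for answer in attempts[:max_attempts]:
        if auto_score(answer, reference):
            return True
    return False

reference = "binary search halves the search interval at each step"
attempts = [
    "binary serch cuts the interval in half each time",  # first try, rejected
    "binary search halves the interval at every step",   # second try, accepted
]
print(grade_with_attempts(attempts, reference))  # True once an attempt passes
```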